Ensemble Methods

A. Sanchez, F. Reverter and E. Vegas

Introduction to Ensembles

Some problems of weak learners.

  • Decision trees have many good properties but some important drawbacks:
    • Lower accuracy than competing alternatives
    • Very sensitive to small changes in the data
    • Overall, this makes them highly variable predictors
  • Trees are not the only classifiers to suffer from such problems.

Ensembles

  • A common strategy to deal with these issues is to build repeated models (weak learners) on the same data and combine them to form a single result.

  • These are called ensemble or consensus estimators/predictors.

  • As a general rule, ensemble learners tend to improve the results obtained with the weak learners they are made of.

Ensemble methods

  • Ensembles can be built on different learners, but we will focus on those built on trees:

    • Bagging,
    • Random Forests,
    • Boosting,
    • Bayesian Trees.

Bagging: Aggregating predictors

Bagging: bootstrap aggregation

  • Decision trees suffer from high variance when compared with other methods such as linear regression, especially when \(n/p\) is moderately large.
  • Given that high variance is intrinsic to trees, a possibility, suggested by Breiman (Breiman 1996), is to build multiple trees derived from the same dataset and, somehow, average them.
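The high variance of tree-like fits can be checked by simulation. The sketch below (in Python for portability, though the labs use R) mimics a fully grown tree with a 1-nearest-neighbour rule — itself a high-variance, piecewise-constant fit — and compares the variance of its predictions across repeated datasets with that of a least-squares line; the data-generating model is an illustrative assumption.

```python
import random

random.seed(1)

def simulate_dataset(n=50, sigma=1.0):
    # Hypothetical model: y = 2x + noise
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [2 * x + random.gauss(0, sigma) for x in xs]
    return xs, ys

def nn_predict(xs, ys, x0):
    # A fully grown tree fits a constant in tiny regions;
    # a 1-nearest-neighbour rule mimics that behaviour.
    i = min(range(len(xs)), key=lambda i: abs(xs[i] - x0))
    return ys[i]

def ols_predict(xs, ys, x0):
    # Simple least-squares line: a low-variance competitor.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return (my - b * mx) + b * x0

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Variance of the prediction at x0 = 0.5 across many fresh datasets
nn_preds, ols_preds = [], []
for _ in range(500):
    xs, ys = simulate_dataset()
    nn_preds.append(nn_predict(xs, ys, 0.5))
    ols_preds.append(ols_predict(xs, ys, 0.5))

print(var(nn_preds), var(ols_preds))  # the tree-like fit varies far more
```

Across resimulated datasets the piecewise-constant fit shows prediction variance close to the noise variance, while the line's variance shrinks roughly like \(1/n\).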

Averaging decreases variance

  • Bagging relies, informally, on the idea that:
    • given \(X\sim F\), s.t. \(Var_F(X)=\sigma^2\),
    • given a s.r.s. \(X_1, ..., X_n\) from \(F\),
    • if \(\overline{X}=\frac{1}{n}\sum_{i=1}^n X_i\) then \(Var_F(\overline{X})=\sigma^2/n\).
  • That is, relying on the sample mean instead of on single observations decreases variance by a factor of \(n\).

Averaging trees …

Two questions arise here:

  1. How do we go from \(X\) to \(X_1, ..., X_n\)?
  • This will be done using bootstrap resampling.
  2. What does “averaging” mean in this context?
  • Depending on the type of tree:
    • Average predictions for regression trees.
    • Majority voting for classification trees.

The bootstrap

  • Bootstrap methods were introduced by Bradley Efron in 1979 (Efron 1979) to estimate the standard error of a statistic.
  • The success of the idea lay in that the procedure was presented as “automatic”, that is:
    • instead of having to do complex calculations,
    • it allowed one to approximate them using computer simulation.
  • Some people called it “the end of mathematical statistics”.

Bootstrap Applications

  • The bootstrap has been applied to almost any problem in Statistics.

    • Computing standard errors,
    • Bias estimation and adjustment,
    • Confidence intervals,
    • Significance tests, …
  • We begin with the easiest and best-known case: estimating the standard error (that is, the square root of the variance) of an estimator.

Estimating a parameter

  • Assume we want to estimate some parameter \(\theta\), that can be expressed as \(\theta (F)\), where \(F\) is the distribution function of each \(X_i\) in \((X_1,X_2,...,X_n)\).
  • For example:

\[\begin{eqnarray*} \theta &=& E_F(X)=\theta (F) \\ \theta &=& Med(X)=\{m:P_F(X\leq m)=1/2\}= \theta (F). \end{eqnarray*}\]

Plug-in estimates

  • To estimate \(\theta(F)\) we usually rely on plug-in estimators: \(\hat{\theta}=\theta (F_n)\):

\[\begin{eqnarray*} \hat{\theta}&=&\overline{X}=\int XdF_n(x)=\frac 1n\sum_{i=1}^nx_i=\theta (F_n) \\ \hat{\theta}&=&\widehat{Med}(X)=\{m:\frac{\#x_i\leq m}n=1/2\}=\theta (F_n) \end{eqnarray*}\]

Precision of an estimate (1)

  • An important question when computing an estimator \(\hat \theta\) of a parameter \(\theta\) is: how precise is \(\hat \theta\) as an estimator of \(\theta\)?

    • With the sample mean, \(\overline{X}\), standard error estimation is immediate because the expression of the variance of the estimator is known: \(\sigma_{\overline{X}}=\frac{\sigma(X)}{\sqrt{n}}\)
    • So, a natural estimator of the standard error of \(\overline{X}\) is: \(\hat\sigma_{\overline{X}}=\frac{\hat{\sigma}(X)}{\sqrt{n}}\)

Precision of an estimate (2)

  • If, as in this case, the variance of \(X\) (and, here, that of \(\overline{X}\)) is a functional of \(F\):

\[ \sigma _{\overline{X}}=\frac{\sigma (X)}{\sqrt{n}}=\frac{\sqrt{\int [x-\int x\,dF(x)]\sp 2dF(x)}}{\sqrt{n}}=\sigma _{\overline{X}}(F) \]

then, the standard error estimator is the same functional applied on \(F_n\), that is:

\[ \hat{\sigma}_{\overline{X}}=\frac{\hat{\sigma}(X)}{\sqrt{n}}=\frac{\sqrt{1/n\sum_{i=1}^n(x_i-\overline{x})^2}}{\sqrt{n}}=\sigma _{\overline{X}}(F_n). \]

Standard error estimation

  • Thus, a way to obtain a standard error estimator \(\widehat{\sigma}_{\widehat{\theta}}\) of an estimator \(\widehat{\theta}\) consists of replacing \(F\) with \(F_n\) in the “population” standard error expression of \(\hat \theta\), \(\displaystyle{\sigma_{\hat \theta}= \sigma_{\hat \theta}(F)}\), whenever it is known.
  • In a schematic form: \[ \sigma_{\hat \theta}= \sigma_{\hat \theta}(F) \Longrightarrow \sigma_{\hat \theta}(F_n)= \widehat{\sigma}_{\hat \theta}. \] That is, the process consists of “plugging in” \(F_n\) into the (known) functional form, \(\sigma_{\hat \theta}(F)\), that defines \(\sigma_{\hat \theta}\).

The bootstrap (1)

  • The previous approach, \(F\simeq F_n \Longrightarrow \sigma_{\hat \theta}(F) \simeq \sigma_{\hat \theta}(F_n)\) presents the obvious drawback that, when the functional form \(\sigma _{\hat{\theta}}(F)\) is unknown, it is not possible to carry out the substitution of \(F\) by \(F_n\).
  • This is, for example, the case of the standard error of the median or of the correlation coefficient.

The bootstrap (2)

  • The bootstrap method makes it possible to do the desired approximation: \[\hat{\sigma}_{\hat\theta} \simeq \sigma _{\hat\theta}(F_n)\] without having to know the form of \(\sigma_{\hat\theta}(F)\).

  • To do this, the bootstrap estimates, or directly approximates, \(\sigma_{\hat{\theta}}(F_n)\) from the sample.

Bootstrap sampling (resampling)

  • The bootstrap makes it possible to estimate the standard error from samples of \(F_n\); that is,

  • the substitution of \(F\) by \(F_n\) is carried out at the sampling step.

\[\begin{eqnarray*} &&\mbox{Instead of: } \\ && \quad F\stackrel{s.r.s}{\longrightarrow }{\bf X} = (X_1,X_2,\dots, X_n) \, \quad (\hat \sigma_{\hat\theta} =\underbrace{\sigma_{\hat\theta}(F_n)}_{unknown}) \\ && \mbox{It is done: } \\ && \quad F_n\stackrel{s.r.s}{\longrightarrow }\quad {\bf X^{*}}=(X_1^{*},X_2^{*}, \dots ,X_n^{*}) \quad (\hat \sigma_{\hat\theta}= \hat \sigma_{\hat \theta}^* \simeq \sigma_{\hat \theta}^*). \end{eqnarray*}\]

Bootstrap resampling (2)

  • Here, \(\sigma_{\hat \theta}^*\) is the bootstrap standard error of \(\hat \theta\) and

  • \(\hat \sigma_{\hat \theta}^*\) the bootstrap estimate of the standard error of \(\hat \theta\).

  • That is, the new (re-)sampling process consists of extracting samples of size \(n\) of \(F_n\):
    \({\bf X^{*}}=(X_1^{*},X_2^{*},\dots ,X_n^{*})\) is a random sample of size \(n\) obtained with replacement from the original sample \((X_1,X_2,\dots ,X_n)\).

  • Samples \({\bf X^*}\), obtained through this procedure are called bootstrap samples or re-samples.

The bootstrap distribution

  • The distribution of a statistic computed from re-samples is called the bootstrap distribution,

\[\begin{eqnarray*} \mathcal {L}(\hat \theta)&\simeq& P_F(\hat\theta \leq t): \mbox{Sampling distribution of } \hat \theta,\\ \mathcal {L}(\hat \theta^*)&\simeq& P_{F_n}(\hat\theta^* \leq t): \mbox{Bootstrap distribution of } \hat \theta, \end{eqnarray*}\]

  • This distribution is usually not known.

  • However, the sampling process and the calculation of the statistic can be approximated using a Monte Carlo algorithm.

Bootstrap Monte Carlo Algorithm

  1. Draw a bootstrap sample, \({\bf x}_1^{*}\) from \(F_n\) and compute \(\hat{\theta}({\bf x}_1^{*})\).
  2. Repeat (1) \(B\) times yielding \(\hat{\theta}({\bf x}_2^{*})\), \(\dots\), \(\hat{\theta}({\bf x}_B^{*})\) estimates.
  3. Compute: \[\begin{equation*} \hat{\sigma}_B (\hat\theta)= \sqrt{ \frac{ \sum_{b=1}^B\left( \hat{\theta}({\bf x}_b^{*})-\overline{\hat{\theta}^*}\right) ^2 }{ B-1 } }, \quad \overline{\hat{\theta}^*}\equiv \frac 1B\sum_{b=1}^B\hat{\theta}\left( {\bf x}_b^{*}\right) \end{equation*}\]
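The three steps above can be sketched in a few lines. This Python example (the exponential sample, its size, and the seed are illustrative assumptions) bootstraps the standard error of the median, a case where the plug-in functional form is not available in closed form:

```python
import random
import statistics

random.seed(123)

# Original sample (hypothetical data from an exponential distribution)
x = [random.expovariate(1.0) for _ in range(100)]

B = 2000
boot_medians = []
for _ in range(B):
    # Step 1: draw a bootstrap sample from F_n (sample with replacement)
    resample = random.choices(x, k=len(x))
    # ...and compute the statistic on it; Step 2: repeat B times
    boot_medians.append(statistics.median(resample))

# Step 3: the standard deviation of the B replicates estimates the SE
se_boot = statistics.stdev(boot_medians)
print(round(se_boot, 3))
```

For this sample size, the bootstrap SE of the median comes out close to the asymptotic value \(1/(2 f(m)\sqrt{n})\approx 0.1\) for the Exp(1) distribution.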

Bootstrap Estimates of SE

  • The main idea is that the bootstrap standard error of \(\hat\theta\), \(\sigma_B(\hat\theta)\), can be approximated by \(\hat{\sigma}_B (\hat\theta)\).

\[ \mbox{if }B\rightarrow\infty \mbox{ then } \hat{\sigma}_B (\hat\theta) \rightarrow \hat\sigma_{\infty} (\hat\theta) =\sigma_B(\hat\theta)=\sigma_{\hat\theta}(F_n). \]

The bootstrap approximation, \(\hat{\sigma}_B(\hat\theta)\), to the bootstrap SE, \(\sigma_B(\hat\theta)\), provides an estimate of \(\sigma_{\hat\theta}(F_n)\):

\[ \hat{\sigma}_B(\hat\theta)(\simeq \sigma_B(\hat\theta)=\sigma_{\hat\theta}(F_n))\simeq\hat \sigma_{\hat\theta}(F_n). \]

Summary

From real world to bootstrap world:

Back to bagging

  • Breiman (Breiman 1996) combined the ideas of:
    • Averaging provides decreased-variance estimates,
    • Bootstrap provides multiple (re)samples.
  • He suggested bootstrap aggregating:
    • Take resamples from the original training dataset.
    • Learn the model on each bootstrapped training set to get a prediction \(\hat f^{*b}(x)\).
    • Use the bootstrap estimates to obtain an improved prediction/classification.

Bagging prediction/classifier

  • For regression (trees) the bagged estimate is the average prediction at \(x\) from these \(B\) trees.

\[\hat f_{bag}(x)=\frac 1B \sum_{b=1}^B \hat f^{*b}(x) \]

  • For classification (trees) the bagged classifier selects the class with the most “votes” from the \(B\) trees:

\[ \hat G_{bag}(x) = \arg \max_k \hat f_{bag,k}(x), \]

where \(\hat f_{bag,k}(x)\) is the proportion of the \(B\) trees that vote for class \(k\) at \(x\).
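Both aggregation rules can be sketched compactly. In the Python sketch below (illustrative only: the "tree" is replaced by a hypothetical one-split stump with a fixed threshold, and the classification votes are hard-coded per-tree predictions), regression predictions are averaged and classification uses a majority vote:

```python
import random
from collections import Counter
from statistics import mean

random.seed(7)

def bootstrap_sample(data):
    # Sample with replacement: one bootstrap training set
    return random.choices(data, k=len(data))

# --- Regression: average the B bootstrap predictions at x ---
def stump_predict(train, x, threshold=0.5):
    # Hypothetical one-split "tree": predict the mean y of the
    # training points on the same side of the threshold as x.
    side = [y for (xi, y) in train if (xi <= threshold) == (x <= threshold)]
    return mean(side) if side else mean(y for _, y in train)

train = [(random.random(), random.random()) for _ in range(50)]
B = 25
f_bag = mean(stump_predict(bootstrap_sample(train), x=0.3) for _ in range(B))

# --- Classification: majority vote over the B trees ---
votes = ["a", "b", "a", "a", "b"]            # hypothetical per-tree predictions
G_bag = Counter(votes).most_common(1)[0][0]  # class with most votes: "a"
print(f_bag, G_bag)
```

The averaging step is exactly \(\hat f_{bag}(x)=\frac 1B \sum_b \hat f^{*b}(x)\); the vote count plays the role of \(\hat f_{bag,k}(x)\).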

Out-Of-Bag observations

  • Every time a resample is taken with replacement, some observations are omitted, because others occur multiple times.
  • These out-of-bag (OOB) observations can be used to build an estimate of prediction error.

Out-Of-Bag error estimates

Since each out-of-bag set is not used to train the model, it can be used to evaluate performance.

  1. Find all trees that were not trained on the OOB instance.
  2. Take the majority vote of these trees for the OOB instance, and compare it to the true value of the OOB instance.
  3. Compile the OOB error over all instances in the OOB dataset.
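A fact worth noting, and easy to verify by simulation: each observation is left out of a given bootstrap sample with probability \((1-1/n)^n \approx e^{-1}\), so roughly 37% of the observations are out of bag for each tree. A short Python sketch (sample size and number of replicates are arbitrary):

```python
import math
import random

random.seed(99)

n, B = 500, 400
oob_fractions = []
for _ in range(B):
    # Indices drawn with replacement; the set keeps distinct ones
    in_bag = set(random.choices(range(n), k=n))
    oob_fractions.append(1 - len(in_bag) / n)  # fraction left out of the bag

avg_oob = sum(oob_fractions) / B
print(round(avg_oob, 3), round(math.exp(-1), 3))  # both close to 0.368
```

This is why the OOB error can be computed for every observation: each one is out of bag for about a third of the trees.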

Illustration of OOB error estimation

Source: https://www.baeldung.com/cs/random-forests-out-of-bag-error

Bagging in R (1.1)

  • This example relies on the well-known AmesHousing dataset on house prices in Ames, IA.

  • We use libraries:

    • rsample for stratified resampling,
    • rpart to control the individual trees,
    • ipred for bagging.
# Prepare "clean" dataset from raw data
ames <- AmesHousing::make_ames()

# Split in test/training
set.seed(123)
split <- rsample::initial_split(ames, prop = 0.7, 
                       strata = "Sale_Price")
ames_train  <- rsample::training(split)
ames_test   <- rsample::testing(split)

Bagging in R (1.2)

system.time(
ames_bag1 <- ipred::bagging(
  formula = Sale_Price ~ .,
  data = ames_train,
  nbagg = 100,  coob = TRUE,
  control = rpart::rpart.control(minsplit = 2, cp = 0)
)
)
#   user  system elapsed 
#  40.16    0.15   40.34


show(ames_bag1)
# Bagging regression trees with 100 bootstrap replications 
# 
# Call: bagging.data.frame(formula = Sale_Price ~ ., data = ames_train, 
#     nbagg = 100, coob = TRUE, control = rpart.control(minsplit = 2, 
#         cp = 0))
# 
# Out-of-bag estimate of root mean squared error:  26350.91 

Interpretability: The “Achilles heel”

  • Trees may have a straightforward interpretation:
    • Plotting the tree provides information about
      • which variables are important,
      • how they act on the prediction.
  • Ensembles are less intuitive because:
    • there is no consensus tree,
    • it is not clear which variables are most important.

Variable importance

  • A complementary way to interpret a tree is by quantifying how important each feature is.

  • This is done by measuring the total reduction in the loss function associated with each variable across all splits.

  • This measure can be extended to an ensemble simply by adding up variable importance over all trees built.

Variable importance example

  • If bagging is performed with caret
  • the vip function from the vip package can be used (see lab examples).

Random Forests

Random forests: decorrelating predictors

  • Bagged trees, being based on re-samples of the same sample, tend to be highly correlated.
  • To get around this, Breiman introduced Random forests, which use a “clever trick” that decorrelates trees:
    • When growing a tree from one bootstrap sample,
    • At each split use only a randomly selected subset of predictors.

Random forests

How many variables per split?

  • The usual recommendation for the number \(m\) of variables randomly selected at each split has been studied by simulation:
    • For regression default value is \(m=p/3\)
    • For classification default value is \(m=\sqrt{p}\).
  • Alternatively the number \(m\) can be chosen using cross-validation.
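The decorrelating trick and the defaults above amount to drawing a fresh random subset of \(m\) predictors before each split. A minimal Python sketch (the feature names and \(p=16\) are hypothetical):

```python
import math
import random

random.seed(0)

predictors = [f"x{i}" for i in range(1, 17)]   # p = 16 hypothetical features
p = len(predictors)

def split_candidates(task):
    # Only a random subset of m predictors competes at each split:
    # m = sqrt(p) for classification, m = p/3 for regression
    m = math.isqrt(p) if task == "classification" else max(1, p // 3)
    return random.sample(predictors, m)

print(split_candidates("classification"))  # 4 of the 16 predictors
print(split_candidates("regression"))      # 5 of the 16 predictors
```

Because different trees see different candidate predictors at each split, their errors are less correlated, which is what makes the averaged forest better than plain bagging.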

Random forest algorithm

Random Forests Algorithm, from chapter 17 in (Hastie and Efron 2016)

Out-of-the-box performance

  • Random forests have become popular because they tend to provide very good out-of-the-box performance, that is:
    • Although they have several hyperparameters that can be tuned,
    • the default values tend to produce good results.
  • Moreover, among the more popular machine learning algorithms, random forests have the least variability in their prediction accuracy when tuning (Probst, Wright, and Boulesteix 2019).

Out-of-the-box performance

  • Training a random forest model with all hyperparameters set to their default values, we get an OOB RMSE that is better than many other algorithms achieve, with or without tuning.
  • This, combined with good stability and ease of use, has made it the option of choice for many problems.

Out-of-the-box performance example

# number of features
n_features <- length(setdiff(names(ames_train), "Sale_Price"))

# train a default random forest model
ames_rf1 <- ranger::ranger(
  Sale_Price ~ ., 
  data = ames_train,
  mtry = floor(n_features / 3),
  respect.unordered.factors = "order",
  seed = 123
)

# get OOB RMSE
(default_rmse <- sqrt(ames_rf1$prediction.error))
## [1] 24859.27

Tuning hyperparameters

There are several parameters that, appropriately tuned, can improve RF performance.

  1. The number of trees in the forest.
  2. The number of features to consider at any given split (\(m_{try}\)).
  3. The complexity of each tree.
  4. The sampling scheme.
  5. The splitting rule to use during tree construction.

Parameters 1 and 2 tend to have the largest impact on predictive accuracy.

Random forests in bioinformatics

  • Random forests have been widely used in Bioinformatics. See (Boulesteix et al. 2012).
  • Bioinformatics data are often high dimensional with
    • dozens or (less often) hundreds of samples/individuals
    • thousands (or hundreds of thousands) of variables.

Application of Random forests

  • Random forests provide robust classifiers, for instance for:
    • Distinguishing cancer from non-cancer samples,
    • Predicting tumor type in cancer of unknown origin,
    • Selecting variables (SNPs) in Genome-Wide Association Studies.
  • Some variations of Random forests are used only for variable selection.

References

Boulesteix, Anne Laure, Silke Janitza, Jochen Kruppa, and Inke R. König. 2012. “Overview of Random Forest Methodology and Practical Guidance with Emphasis on Computational Biology and Bioinformatics.” WIREs Data Mining and Knowledge Discovery 2 (November): 493–507. https://doi.org/10.1002/WIDM.1072.
Breiman, Leo. 1996. “Bagging Predictors.” Machine Learning 24: 123–40. https://doi.org/10.1007/BF00058655.
Efron, B. 1979. “Bootstrap Methods: Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. https://doi.org/10.1214/aos/1176344552.
Hastie, T., and B. Efron. 2016. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press.
Probst, Philipp, Marvin N. Wright, and Anne-Laure Boulesteix. 2019. “Hyperparameters and Tuning Strategies for Random Forest.” WIREs Data Mining and Knowledge Discovery 9 (3): e1301. https://doi.org/10.1002/widm.1301.